We can never be certain that a software system is correct simply by testing it, but with every additional successful test we become less uncertain about its correctness. In the absence of source code or elaborate specifications and models, tests are usually generated or chosen randomly. However, rather than choosing tests at random, it would be preferable to choose those tests that decrease our uncertainty about correctness the most. To guide test generation, we apply what is referred to in Machine Learning as the "Query Strategy Framework": we infer a behavioural model of the system under test and select those tests about which the inferred model is least certain. Running these tests on the system under test thus directly targets those parts about which the tests so far have failed to inform the model. We provide an implementation that uses a genetic programming engine for model inference in order to enable an uncertainty sampling technique known as "query by committee", and evaluate it on eight subject systems from the Apache Commons Math framework and Joda-Time. The results indicate that test generation using uncertainty sampling outperforms conventional and Adaptive Random Testing.
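As a minimal illustration of the "query by committee" idea described above, the sketch below trains a committee of models on bootstrap resamples of the observed input/output pairs and selects the candidate test input on which the committee's predictions disagree most. This is not the paper's implementation: the committee members here are simple least-squares lines standing in for the genetic-programming models, and all function names are illustrative assumptions.

```python
import random
import statistics

def fit_line(xs, ys):
    """Least-squares line fit; returns a prediction function.
    A stand-in for a genetic-programming model of the system."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    var = sum((x - mx) ** 2 for x in xs)
    if var == 0.0:                      # degenerate resample: constant model
        return lambda x: my
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def train_committee(xs, ys, size=5, seed=0):
    """Train a committee of models, each on a bootstrap resample
    of the (test input, observed output) pairs seen so far."""
    rng = random.Random(seed)
    n = len(xs)
    committee = []
    for _ in range(size):
        idx = [rng.randrange(n) for _ in range(n)]
        committee.append(fit_line([xs[i] for i in idx],
                                  [ys[i] for i in idx]))
    return committee

def disagreement(committee, x):
    """Variance of the committee's predictions at input x:
    higher variance = the inferred models are less certain."""
    return statistics.pvariance([model(x) for model in committee])

def select_next_test(committee, candidates):
    """Uncertainty sampling: pick the candidate test input
    on which the committee disagrees the most."""
    return max(candidates, key=lambda x: disagreement(committee, x))
```

Running the selected test on the real system, recording its output, and retraining the committee closes the loop: each new test is chosen precisely where the inferred models have so far learned the least.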